Tristan Agion
Viziam Amarr Empire
Posted - 2016.01.23 10:45:34
I've tried this, in the hope that it could be something to do, for example, while mining. (I don't feel particularly morally obliged to help science, because, well, I am a scientist...)
Here's why I won't play this as it is now. Please feel free to correct me where I've overlooked something; I only played with this for a short while...
1. The window apparently cannot be scaled up. I'm squinting at a less-than-postcard-sized interface. I see no reason why it could not be magnified to whatever size I'm comfortable with on my reasonably large screen.
2. The pictures/texts on the side, which are supposed to help with feature selection, range from insufficient to misleading:
a. The descriptions are way too short, general, and vague. We need proper instructions on what to look for, and in particular on how to differentiate a feature from other, similar-looking cases.
b. There is a single example picture. We should have access to a dozen examples that show the variety one might encounter. Sometimes these pictures are outright misleading because of that lack of variety. For example, there is one with a cloud of green dots on black, but you can score correctly with the cloud of dots on, say, a red filament. I got some "good" ticks for that. Later, in the "crowd phase", I ended up being the only one ticking this. I'm not sure that I am right, of course, but I think people were simply misled into thinking that the picture has to have a black background. Likewise there is colour variation, e.g., yellow instead of red filaments, etc.
3. The training phase with "worked examples" is way too short and does not provide proper feedback. I was barely getting over 50% by the time I got into the community phase. That gives one no confidence at all in what one is submitting. I think nobody should get into the communal scoring unless they can pass a threshold of, say, 70-80%. And there is no real feedback, just "this was right, this was wrong, this is what you should have clicked" according to tick marks. We do not get anything like "no, this is not that, because it is not stretched out like this, see?" But that kind of feedback is really vital for learning (and could be automated to a large degree).
4. The community scoring phase seems like a good idea, but I found it entirely demotivating. The problem is that while I intellectually knew that I was probably doing about as well as before (so not very well, somewhat above 50%), psychologically, seeing your mismatches with the community as percentage numbers just makes you feel bad. A result sheet covered in 0%-30% doesn't make me think "well, if there are a lot of people with only a 50% rate, then there will be a lot of scatter of error" (see the rough sketch after this point), it makes me think "wow, I suck". There's no positive feedback there unless one tries to chase the crowd percentage, which frankly I don't think is the point. (Basically I would not be training myself to identify the patterns, but to identify what the majority of people like to identify, which is not the same at all unless it is a group of experts.)
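To put a rough number on that "scatter of error" point, here is a quick back-of-envelope Python simulation. Everything in it is my own assumption rather than how the scoring actually works: I assume one correct category out of C per image, that every player (me included) picks the correct one with probability P and a uniformly random wrong one otherwise, and that the result sheet shows the percentage of other players whose pick matches mine.

import random

C = 6          # candidate categories per image (my assumption)
P = 0.55       # individual accuracy, "somewhat above 50%"
PLAYERS = 200  # how many others score the same image (my assumption)
IMAGES = 1000  # simulated images

def pick(true_cat):
    # Correct with probability P, otherwise a uniformly random wrong category.
    if random.random() < P:
        return true_cat
    return random.choice([c for c in range(C) if c != true_cat])

agreement = []
for _ in range(IMAGES):
    true_cat = random.randrange(C)
    mine = pick(true_cat)
    others = [pick(true_cat) for _ in range(PLAYERS)]
    agreement.append(100.0 * sum(o == mine for o in others) / PLAYERS)

print("mean agreement with the crowd: %.0f%%" % (sum(agreement) / IMAGES))
print("images scored below 30%%: %.0f%%" % (100.0 * sum(a < 30 for a in agreement) / IMAGES))

With P = 0.55 and C = 6 this gives a mean agreement of roughly a third, and close to half of the images come back under 30%, even though the simulated player is exactly as good as the rest of the crowd. In other words, under these assumptions a sheet full of low percentages is exactly what one should expect; it just doesn't feel that way.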
It is at this point, after a few rounds, that I stopped, thinking "this is really useless". I think there really have to be changes in this phase, or I will not be playing this. We basically need "correct" feedback, even if it is delayed. So have some expert score the communal ones, and when he or she gets around to one we have done, we get that as feedback, not what other amateurs with likely similarly bad hit rates of around 50% are guessing.
So overall my comment would be: nice idea, bad UI and documentation, and the progression into communal scoring needs serious work. It's not ready for TQ at all in my opinion.